Search for: All records
Total Resources: 5
- Author / Contributor
  - Fan, Linxi (5)
  - Zhu, Yuke (4)
  - Huang, De-An (2)
  - Anandkumar, Anima (1)
  - Balaji, Yogesh (1)
  - Bastani, Osbert (1)
  - Fan, Jiaojiao (1)
  - Fei-Fei, Li (1)
  - Ganguli, Surya (1)
  - Gupta, Agrim (1)
  - Huang, Shuaiyi (1)
  - Jayaraman, Dinesh (1)
  - Jiang, Zhenyu (1)
  - Kautz, Jan (1)
  - Krähenbühl, Philipp (1)
  - Levy, Mara (1)
  - Li, Max (1)
  - Liang, William (1)
  - Liu, Ming-Yu (1)
  - Ma, Jason (1)
-
Free, publicly-accessible full text available July 1, 2026
-
Zhao, Yue; Xue, Fuzhao; Reed, Scott; Fan, Linxi; Zhu, Yuke; Kautz, Jan; Yu, Zhiding; Krähenbühl, Philipp; Huang, De-An (cs.CV)
  We introduce Quantized Language-Image Pretraining (QLIP), a visual tokenization method that combines state-of-the-art reconstruction quality with state-of-the-art zero-shot image understanding. QLIP trains a binary-spherical-quantization-based autoencoder with reconstruction and language-image alignment objectives. We are the first to show that the two objectives do not need to be at odds. We balance the two loss terms dynamically during training and show that a two-stage training pipeline effectively mixes the large-batch requirements of image-language pre-training with the memory bottleneck imposed by the reconstruction objective. We validate the effectiveness of QLIP for multimodal understanding and text-conditioned image generation with a single model. Specifically, QLIP serves as a drop-in replacement for the visual encoder for LLaVA and the image tokenizer for LlamaGen with comparable or even better performance. Finally, we demonstrate that QLIP enables a unified mixed-modality auto-regressive model for understanding and generation.
  Free, publicly-accessible full text available February 7, 2026
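The abstract above says the reconstruction and language-image alignment losses are balanced dynamically during training. As a minimal, hypothetical sketch of that idea (the weighting rule below is an illustrative assumption, not the scheme published in the QLIP paper), one common way to keep two objectives on a comparable scale is to normalize each term by a running estimate of its own magnitude:

```python
def balanced_loss(recon, align, ema_recon, ema_align, beta=0.99, eps=1e-8):
    """Combine two scalar loss terms, each normalized by an exponential
    moving average (EMA) of its past magnitude, so neither objective
    dominates as loss scales drift during training.

    recon, align         -- current scalar loss values
    ema_recon, ema_align -- EMAs of past loss magnitudes
    Returns (total, new_ema_recon, new_ema_align).
    """
    # Update the running magnitude estimate for each objective.
    ema_recon = beta * ema_recon + (1 - beta) * recon
    ema_align = beta * ema_align + (1 - beta) * align
    # Normalize each term by its running scale; eps guards division by zero.
    total = recon / (ema_recon + eps) + align / (ema_align + eps)
    return total, ema_recon, ema_align
```

When the EMAs track the loss magnitudes, each normalized term hovers near 1, so gradients from the two objectives stay comparably sized regardless of their raw scales.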
-
Huang, Shuaiyi; Levy, Mara; Jiang, Zhenyu; Anandkumar, Anima; Zhu, Yuke; Fan, Linxi; Huang, De-An; Shrivastava, Abhinav (IEEE)
-
Ma, Jason; Liang, William; Wang, Hung-Ju; Zhu, Yuke; Fan, Linxi; Bastani, Osbert; Jayaraman, Dinesh (RSS)
-
Gupta, Agrim; Fan, Linxi; Ganguli, Surya; Fei-Fei, Li (International Conference on Learning Representations)
Full Text Available